
    Support Vector Machine classification of strong gravitational lenses

    The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass. As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both of these aims. We present and discuss a Support Vector Machine (SVM) algorithm which makes use of a Gabor filterbank in order to provide learning criteria for the separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100 000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey. Comment: Accepted by MNRAS.
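    The Gabor-filterbank + SVM pipeline described above can be sketched in a few lines. This is a minimal illustration on synthetic 32×32 images (a faint ring standing in for a lensed arc); the filter parameters, image sizes, and kernel choices are hypothetical and not those of the paper.

```python
import numpy as np
from sklearn.svm import SVC

def gabor_kernel(size, frequency, theta, sigma=3.0):
    """Real part of a Gabor kernel (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * frequency * xr)

def gabor_features(image, frequencies=(0.1, 0.2), n_theta=4, size=9):
    """Mean absolute filter response for each (frequency, orientation) pair."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_theta, endpoint=False):
            k = gabor_kernel(size, f, theta)
            # circular convolution via the FFT
            resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, image.shape)).real
            feats.append(np.abs(resp).mean())
    return np.array(feats)

rng = np.random.default_rng(0)
# toy "lens" images contain a faint ring; "non-lens" images are pure noise
yy, xx = np.mgrid[0:32, 0:32]
ring = (np.abs(np.hypot(yy - 16, xx - 16) - 8) < 1.5).astype(float)
X, y = [], []
for i in range(60):
    img = rng.normal(0, 1, (32, 32)) + (ring * 3 if i % 2 == 0 else 0)
    X.append(gabor_features(img))
    y.append(i % 2 == 0)
X, y = np.array(X), np.array(y)
clf = SVC(kernel="rbf").fit(X[:40], y[:40])
acc = clf.score(X[40:], y[40:])
```

    The filterbank responses are ring-sensitive, so even this toy SVM separates the two classes well.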

    Multi-source Domain Adaptation via Weighted Joint Distributions Optimal Transport

    This work addresses the problem of domain adaptation on an unlabelled target dataset using knowledge from multiple labelled source datasets. Most current approaches tackle this problem by searching for an embedding that is invariant across source and target domains, which corresponds to searching for a universal classifier that works well on all domains. In this paper, we address the problem from a new perspective: instead of suppressing the diversity of the source distributions, we exploit it to adapt better to the target distribution. Our method, named Multi-Source Domain Adaptation via Weighted Joint Distribution Optimal Transport (MSDA-WJDOT), aims at simultaneously finding an Optimal Transport-based alignment between the source and target distributions and a re-weighting of the source distributions. We discuss the theoretical aspects of the method and propose a conceptually simple algorithm. Numerical experiments indicate that the proposed method achieves state-of-the-art performance on simulated and real datasets.
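    The core re-weighting idea, choosing source weights so that the weighted mixture of sources is close to the target in an optimal-transport sense, can be illustrated with a toy 1-D example. This is only a sketch of the weighting principle (grid search over the simplex, quantile-based W1 distance); the actual MSDA-WJDOT algorithm aligns joint feature/label distributions, which is not reproduced here.

```python
import numpy as np

def wasserstein1(u, v):
    """W1 between two equal-size 1-D empirical samples (quantile coupling)."""
    return np.abs(np.sort(u) - np.sort(v)).mean()

rng = np.random.default_rng(0)
n = 500
src = [rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n)]   # two labelled sources
tgt = rng.normal(2.7, 1.0, n)                              # unlabelled target

# Search the weight simplex for the source mixture closest to the target.
best_alpha, best_cost = None, np.inf
for a in np.linspace(0, 1, 101):
    take = rng.random(n) < a            # draw from source 0 with probability a
    mix = np.where(take, rng.choice(src[0], n), rng.choice(src[1], n))
    cost = wasserstein1(mix, tgt)
    if cost < best_cost:
        best_alpha, best_cost = a, cost
```

    Because the target sits near the second source, the search puts most of the weight on it (best_alpha is small), which is the behaviour the method exploits instead of training one invariant classifier.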

    Neural network time-series classifiers for gravitational-wave searches in single-detector periods

    The search for gravitational-wave signals is limited by non-Gaussian noise transients that mimic astrophysical signals. Temporal coincidence between two or more detectors is used to mitigate contamination by these instrumental glitches. However, when a single detector is in operation, coincidence is impossible, and other strategies have to be used. We explore the possibility of using neural network classifiers and present the results obtained with three types of architectures: convolutional neural network, temporal convolutional network, and InceptionTime. The last two architectures are specifically designed to process time-series data. The classifiers are trained on a month of data from the LIGO Livingston detector during the first observing run (O1) to identify data segments that include the signature of a binary black hole merger. Their performances are assessed and compared. We then apply the trained classifiers to the remaining three months of O1 data, focusing specifically on single-detector times. The most promising candidate from our search is 2016-01-04 12:24:17 UTC. Although we are not able to constrain the significance of this event to the level conventionally required in gravitational-wave searches, we show that the signal is compatible with the merger of two black holes with masses m1 = 50.7 (+10.4/−8.9) M⊙ and m2 = 24.4 (+20.2/−9.3) M⊙ at a luminosity distance of dL = 564 (+812/−338) Mpc. Comment: 29 pages, 11 figures, submitted to CQG.
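    One reason temporal convolutional networks suit long strain time series is that dilated causal convolutions grow the receptive field exponentially with depth. The standard formula can be checked in a few lines; the kernel size and layer count below are hypothetical, not the configuration used in the paper.

```python
def tcn_receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of a stack of dilated causal convolutions:
    1 + sum over layers of (kernel_size - 1) * dilation."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# e.g. kernel 3 with dilations doubling over 8 layers: 1, 2, 4, ..., 128
rf = tcn_receptive_field(3, [2**i for i in range(8)])  # → 511 samples
```

    Doubling the dilation each layer lets a shallow stack span hundreds of samples, whereas an undilated stack of the same depth would span only 17.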

    The strong gravitational lens finding challenge

    Large-scale imaging surveys will increase the number of galaxy-scale strong lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. To achieve these objectives, automated methods must be developed. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100 000 candidate objects as to whether they were gravitational lenses or not, with the goal of developing better automated methods for finding lenses in large data sets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVM) and convolutional neural networks (CNN). We find that many of the methods will easily be fast enough to analyse the anticipated data flow. In test data, several methods are able to identify upwards of half the lenses, after applying thresholds on lens characteristics such as lensed image brightness, size or contrast with the lens galaxy, without making a single false-positive identification. This is significantly better than direct inspection by humans was able to achieve. Multi-band, ground-based data are found to be better for this purpose than single-band space-based data with lower noise and higher resolution, suggesting that multi-colour information is crucial. Multi-band space-based data will be superior to ground-based data. The most difficult challenge for a lens finder is differentiating between rare, irregular and ring-like face-on galaxies and true gravitational lenses. The degree to which the efficiency and biases of lens finders can be quantified largely depends on the realism of the simulated data on which the finders are trained.
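    The headline metric above, the fraction of lenses recovered without a single false positive, amounts to thresholding just above the best-scoring non-lens. A minimal sketch with synthetic classifier scores (the score distributions are made up for illustration):

```python
import numpy as np

def tpr_at_zero_fp(scores, labels):
    """Fraction of true lenses recovered at the strictest threshold admitting
    no false positives: threshold just above the top non-lens score."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    thresh = scores[~labels].max()
    return (scores[labels] > thresh).mean()

# toy scores: lenses generally score higher, but with overlap
rng = np.random.default_rng(1)
lens_scores = rng.normal(2.0, 1.0, 200)
nonlens_scores = rng.normal(0.0, 1.0, 2000)
scores = np.concatenate([lens_scores, nonlens_scores])
labels = np.concatenate([np.ones(200, bool), np.zeros(2000, bool)])
frac = tpr_at_zero_fp(scores, labels)
```

    With many non-lenses, a single high-scoring contaminant drags the threshold up, which is why reducing false positives matters so much for rare objects.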

    Influence of surface roughness on diffraction in the externally occulted Lyot solar coronagraph

    Context. The solar coronagraph ASPIICS will fly on the future ESA formation-flying mission Proba-3. The instrument combines an external occulter of 1.42 m diameter and a Lyot solar coronagraph of 5 cm diameter, located 144 m downstream. Aims. The theoretical performance of the externally occulted Lyot coronagraph has previously been computed by assuming perfect optics. In this paper, we improve this modelling by introducing roughness-scattering effects from the telescope. We compute the diffraction at the detector, which we compare to the ideal case without perturbation in order to estimate the performance degradation. We also investigate the influence of sizing the internal occulter and the Lyot stop, and perform a sensitivity analysis on the roughness. Methods. We build on a recently published numerical model of diffraction propagation. The micro-structures of the telescope are generated by filtering white noise with a power spectral density following an isotropic ABC function, as suggested by Harvey scatter theory. The parameters were tuned to fit experimental data measured on the ASPIICS lenses. The computed wave-front error was included in the Fresnel wave propagation through the coronagraph. A circular integration over the solar disk was performed to reconstruct the complete diffraction intensity. Results. The level of micro-roughness is 1.92 nm root-mean-square. Compared to the ideal case, in the plane of the internal occulter the diffraction peak intensity is reduced by ≃0.001%. However, the intensity outside the peak increases by 12% on average, and by up to 20% at 3 R⊙, where the mask does not filter out the diffraction. At detector level, the diffraction peak remains ≃10⁻⁶ at 1.1 R⊙, similar to the ideal case, but the diffraction tail at large solar radii is higher by up to one order of magnitude. Sizing the internal occulter and the Lyot stop does not improve the rejection, as opposed to the ideal case. Conclusions. Beyond these results, this paper provides a methodology for implementing roughness scattering in the wave-propagation model of a solar coronagraph.
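    The roughness-synthesis step, white noise filtered by an isotropic ABC power spectral density and rescaled to the measured 1.92 nm RMS, can be sketched as follows. The A, B, C values and the grid pitch are placeholders, not the values fitted to the ASPIICS lenses.

```python
import numpy as np

def abc_psd(f, A, B, C):
    """Isotropic ABC (K-correlation) power spectral density, per Harvey theory."""
    return A / (1.0 + (B * f) ** 2) ** (C / 2.0)

def synth_roughness(n=256, pitch=1e-3, A=1.0, B=5e-2, C=3.0,
                    rms_target=1.92e-9, seed=0):
    """Filter 2-D white noise by sqrt(PSD) in Fourier space, then rescale
    the surface to the target RMS (metres)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=pitch)
    fr = np.hypot(*np.meshgrid(fx, fx))          # radial spatial frequency
    spec = np.fft.fft2(rng.normal(size=(n, n))) * np.sqrt(abc_psd(fr, A, B, C))
    surf = np.fft.ifft2(spec).real
    surf -= surf.mean()
    return surf * (rms_target / surf.std())

surface = synth_roughness()
```

    The resulting height map can then be converted to a wave-front error and injected into the Fresnel propagation, as the Methods describe.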

    A Group-Lasso Active Set Strategy for Multiclass Hyperspectral Image Classification

    Hyperspectral images have a strong potential for land-cover/land-use classification, since the spectra of the pixels can highlight subtle differences between materials and provide information beyond the visible spectrum. Yet a limitation of most current approaches is the hypothesis of spatial independence between samples: images are spatially correlated and the classification map should exhibit spatial regularity. One way of integrating spatial smoothness is to augment the input spectral space with filtered versions of the bands. However, open questions remain, such as the selection of the bands to be filtered, or the filterbank to be used. In this paper, we consider the entire set of possible spatial filters by using an incremental feature learning strategy that assesses whether a candidate feature would improve the model if added to the current input space. Our approach is based on a multiclass logistic classifier with group-lasso regularization. The optimization of this classifier yields an optimality condition that can easily be used to assess the interest of a candidate feature without retraining the model, thus allowing drastic savings in computational time. We apply the proposed method to three challenging hyperspectral classification scenarios, including agricultural and urban data, and study both the ability of the incremental setting to learn features that always improve the model and the nature of the features selected.
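    The screening idea behind such an active-set strategy can be sketched directly: under group-lasso regularization with strength λ, a currently zero-weighted feature group can improve the objective iff the ℓ2 norm of the loss gradient restricted to that group exceeds λ. The function name and toy data below are illustrative, not the paper's implementation.

```python
import numpy as np

def group_would_enter(X_g, grad_per_sample, lam):
    """Group-lasso screening: a zero-weighted candidate group violates the
    optimality condition (so adding it would improve the model) iff
    ||X_g^T grad / n||_2 > lam. grad_per_sample is dLoss/dscore per sample
    (e.g. p - y for logistic loss)."""
    g = X_g.T @ grad_per_sample / len(grad_per_sample)
    return float(np.linalg.norm(g)) > lam

# toy check: a candidate group aligned with the current residual gradient
# enters the active set; one orthogonal to it does not.
resid = np.array([1.0, -1.0, 1.0, -1.0])
informative = resid.reshape(-1, 1)                   # correlates with the residual
useless = np.array([[1.0], [1.0], [-1.0], [-1.0]])   # orthogonal to it
```

    Because this test needs only one gradient evaluation per candidate, filters can be screened without ever retraining the classifier, which is the source of the claimed computational savings.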

    Performance of the hybrid externally occulted Lyot solar coronagraph

    Context. High-contrast hybrid coronagraphs, which combine an external occulter and a Lyot-style coronagraph, have become a reality in recent years, despite the lack of analytic and numerical end-to-end performance studies. The solar coronagraph ASPIICS, which will fly on the future ESA formation-flying mission Proba-3, is a good example of such a hybrid coronagraph. Aims. We aim to provide a numerical model for computing the theoretical performance of the hybrid externally occulted Lyot-style coronagraph, which we then compare to the performance of the classical Lyot coronagraph and the externally occulted solar coronagraph. We provide the level and intensity distribution of the stray light when the Sun is considered as an extended source. We also investigate the effect of different sizes for the internal occulter and the Lyot stop. Methods. First, we build on a recently published approach to express the wave front produced by Fresnel diffraction from the external occulter at the entrance aperture of the coronagraph. Second, we compute the coherent propagation of the wave front coming from a given point of the Sun through the instrument. This is performed in three steps: from the aperture to the image of the external occulter, where the internal occulter is set; from this plane to the image of the entrance aperture, where the Lyot stop is set; and from there to the final image plane. Making use of the axial symmetry, we consider wave fronts originating from one radius of the Sun and circularly average the intensities. Our numerical computation uses the parameters of ASPIICS. Results. The hybrid externally occulted Lyot coronagraph rejects sunlight below 10⁻⁸ B⊙ beyond 1.3 R⊙ in the particular configuration of ASPIICS. The Lyot coronagraph effectively complements the external occultation. We show that reducing the Lyot stop allows a clear gain in rejection, even better than oversizing the internal occulter, which tends to exclude observations very close to the solar limb. As an illustration, we provide a graph that allows performance to be estimated as a function of the internal occulter and Lyot stop sizes. Conclusions. Our work provides a methodological approach to computing the end-to-end performance of solar coronagraphs.
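    The three-plane propagation described in the Methods can be caricatured, for an on-axis point source and ideal optics, by an FFT-based Lyot chain. This toy model (hypothetical aperture, occulter, and stop sizes; no external occulter, no extended Sun) only illustrates the pupil → focal → occulter → pupil → Lyot-stop sequence, not the paper's full Fresnel computation.

```python
import numpy as np

def lyot_rejection(n=256, ap_frac=0.4, occ_pix=8, stop_frac=0.34):
    """Fraction of point-source energy leaking through an idealised Lyot chain:
    pupil -> focal plane (FFT) -> hard internal occulter blocking the PSF core
    -> back to the pupil (IFFT) -> undersized Lyot stop."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    pupil = (r < ap_frac).astype(complex)
    e_in = (np.abs(pupil) ** 2).sum()          # input energy in the pupil

    focal = np.fft.fftshift(np.fft.fft2(pupil))
    py, px = np.mgrid[0:n, 0:n]
    occulter = np.hypot(px - n // 2, py - n // 2) > occ_pix   # block PSF core
    relayed = np.fft.ifft2(np.fft.ifftshift(focal * occulter))

    lyot = relayed * (r < stop_frac)           # undersized Lyot stop
    return (np.abs(lyot) ** 2).sum() / e_in

leak = lyot_rejection()
```

    After the occulter, the residual diffracted light piles up near the pupil edge, which is exactly what the undersized Lyot stop removes; this is the mechanism behind the gain from reducing the Lyot stop noted above.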

    Computer-aided diagnostic system for prostate cancer detection and characterization combining learned dictionaries and supervised classification

    This paper presents results of a computer-aided diagnostic (CAD) system for voxel-based detection and characterization of prostate cancer in the peripheral zone, based on multiparametric magnetic resonance (mp-MR) imaging. We propose an original scheme combining a feature extraction step based on a sparse dictionary learning (DL) method with supervised classification, in order to discriminate normal {N} and normal-but-suspect {NS} tissues as well as different classes of cancer tissue whose aggressiveness is characterized by the Gleason score, ranging from 6 {GL6} to 9 {GL9}. We compare the classification performance of two supervised methods, the linear support vector machine (SVM) and the logistic regression (LR) classifiers, in a binary classification task. Classification performance was evaluated over an mp-MR image database of 35 patients in which each voxel was labelled, based on a ground truth, by an expert radiologist. Results show that the proposed method, in addition to being interpretable thanks to the sparse representation of the voxels, compares well (AUC > 0.8) with recent state-of-the-art performances. Preliminary visual analysis of example patient cancer probability maps indicates that cancer probabilities tend to increase as a function of the Gleason score.
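    The two-stage scheme, unsupervised sparse dictionary learning followed by a supervised classifier on the sparse codes, can be sketched with scikit-learn on synthetic "voxel" feature vectors. The data, dimensions, and hyperparameters below are illustrative, not those of the mp-MR study.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# toy voxels: 20-dim feature vectors drawn around two class prototypes
n, d = 300, 20
base0, base1 = rng.normal(size=d), rng.normal(size=d)
y = rng.integers(0, 2, n)
X = np.where(y[:, None] == 0, base0, base1) + rng.normal(0, 0.5, (n, d))

# 1) learn a sparse dictionary on the (unlabelled) feature vectors
dico = MiniBatchDictionaryLearning(n_components=8, alpha=0.5, random_state=0)
codes = dico.fit_transform(X)      # sparse codes = new voxel representation

# 2) supervised classification on the sparse codes (LR; the paper also uses SVM)
clf = LogisticRegression(max_iter=1000).fit(codes[:200], y[:200])
acc = clf.score(codes[200:], y[200:])
```

    The sparse codes keep the pipeline interpretable: each voxel is described by a handful of learned atoms, which is the property the paper highlights.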

    Preface


    Mathematical Tools for Instrumentation & Signal Processing in Astronomy
